Perception Pipeline

You will be pleased to know that you have already completed about 75% of this project by finishing the RoboND-Perception-Exercises! All you need to do for a passing submission on this project is to implement your perception pipeline in a new environment with new objects and output some ROS messages telling the robot where to find them.

Creating the perception pipeline

Start out by creating a ROS node just like you did in the exercises. In the pr2_robot/scripts/ directory you'll find a file called project_template.py, where you can move over all your code from Exercise-3 (or start from scratch if you like).

First, you'll need to change your subscriber (or create a new subscriber) to subscribe to the camera data (point cloud) topic /pr2/world/points.

To make sure everything is working, try publishing the same data on your own topic and viewing it in RViz. To refresh your memory on how to do this, follow the tutorial in the Publish Your Point Cloud section.
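A bare-bones version of that node might look something like the sketch below. The node name and the /pcl_test topic are just placeholders you can rename; only the /pr2/world/points topic comes from the project itself:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

def pcl_callback(pcl_msg):
    # For now, just republish the cloud unchanged so you can view it in RViz
    pcl_test_pub.publish(pcl_msg)

if __name__ == '__main__':
    rospy.init_node('perception_pipeline', anonymous=True)

    # Subscribe to the PR2 camera's point cloud
    pcl_sub = rospy.Subscriber('/pr2/world/points', PointCloud2,
                               pcl_callback, queue_size=1)

    # Republish on a topic of your own (the name here is arbitrary)
    pcl_test_pub = rospy.Publisher('/pcl_test', PointCloud2, queue_size=1)

    rospy.spin()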

Initial point cloud with noise.

Filtering

Once you have your camera data, start out by applying various filters you have looked at so far. Keep in mind you'll have to tweak the parameters to accommodate this new environment. Also, this new dataset contains noise! To clean it up, the first filter you should apply is the statistical outlier filter.
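As a reminder of what that looks like in python-pcl, here's a sketch, assuming cloud is the pcl cloud you got by converting the incoming ROS message (e.g. with ros_to_pcl() from the exercises' pcl_helper). The parameter values are only starting points; you'll need to tune them for this scene:

# Statistical outlier filtering, where `cloud` is your pcl cloud
outlier_filter = cloud.make_statistical_outlier_filter()

# Number of neighboring points to analyze for any given point
outlier_filter.set_mean_k(50)

# A point with a mean distance to its neighbors greater than
# (global mean distance + x * global std dev) is flagged as an outlier
x = 1.0
outlier_filter.set_std_dev_mul_thresh(x)

cloud_filtered = outlier_filter.filter()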

Note: the statistical outlier filter in python-pcl was broken for a time and was only fixed after the exercises were released, so if you get an error like this when you run it:

Error: TypeError: __cinit__() takes exactly 1 positional argument (0 given)

You'll need to git pull the RoboND-Perception-Exercises repo to get the latest updates, then re-install python-pcl:

$ cd ~/RoboND-Perception-Exercises/python-pcl
$ python setup.py build
$ sudo python setup.py install

For a reminder on the filters covered in Exercise-1, have a look here.

Your point cloud after statistical outlier filtering

Table Segmentation

Next, perform RANSAC plane fitting to segment the table in the scene, much like you did in a previous lesson, to separate the objects from the table.
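In python-pcl, the plane fit looks roughly like this (a sketch assuming cloud_filtered is the output of your filtering steps; max_distance will need tuning for this scene):

import pcl

# Create the segmentation object and fit a plane model with RANSAC
seg = cloud_filtered.make_segmenter()
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)

# Maximum distance for a point to be counted as part of the plane
max_distance = 0.01
seg.set_distance_threshold(max_distance)

inliers, coefficients = seg.segment()

# Inliers belong to the table; everything else is the objects
cloud_table = cloud_filtered.extract(inliers, negative=False)
cloud_objects = cloud_filtered.extract(inliers, negative=True)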

Clustering

Use the Euclidean Clustering technique described here to separate the objects into distinct clusters, thus completing the segmentation process.
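Here's a sketch of that step, assuming cloud_objects is the extracted-objects cloud from the RANSAC step and that the pcl_helper utilities from the exercises are importable. The tolerance and cluster-size values are starting points, not final answers:

from sensor_stick.pcl_helper import XYZRGB_to_XYZ

# Euclidean clustering uses only spatial information, so drop the color data
white_cloud = XYZRGB_to_XYZ(cloud_objects)
tree = white_cloud.make_kdtree()

ec = white_cloud.make_EuclideanClusterExtraction()
ec.set_ClusterTolerance(0.02)   # distance threshold between cluster points
ec.set_MinClusterSize(100)      # discard tiny clusters (residual noise)
ec.set_MaxClusterSize(3000)     # discard implausibly large clusters
ec.set_SearchMethod(tree)

# One list of point indices per detected object
cluster_indices = ec.Extract()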

Object Recognition

For this project, as you already saw in the previous section, you have a variety of different objects to identify. Essentially, there are three different worlds or scenarios that you are going to work with where each scenario has different items on the table in front of the robot. These worlds are located in the /pr2_robot/worlds/ folder, namely the test_*.world files.

By default, you start with the test1.world but you can modify that in the pick_place_project.launch file in the /pr2_robot/launch/ folder:

  <!--Launch a gazebo world-->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <!--TODO:Change the world name to load different tabletop setup-->
    <arg name="world_name" value="$(find pr2_robot)/worlds/test1.world"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="paused" value="false"/>
    <arg name="gui" value="true"/>
    <arg name="headless" value="false"/>
    <arg name="debug" value="false"/>
  </include>

Test3 world with all 8 items

You can now complete the object recognition steps you performed in Exercise-3 including:

  • Generate a training set of features for the objects in your pick lists (see the pick_list_*.yaml files in /pr2_robot/config/). Each pick list corresponds to a world and thus indicates what items will be present in that scenario. To generate the training set, you will have to modify the models list in the capture_features.py script and run it as you did for Exercise-3:
if __name__ == '__main__':
    rospy.init_node('capture_node')

    # Modify following list with items from pick_list_*.yaml
    models = [
        'beer',
        'bowl',
        'create',
        'disk_part',
        'hammer',
        'plastic_cup',
        'soda_can']
  • Train your SVM with features from the new models.
  • Add your object recognition code to your perception pipeline (see the sketch after this list).
  • Test with the actual project scene to see if your recognition code is successful.
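For reference, the per-cluster classification step from Exercise-3 looks roughly like the sketch below. It assumes your trained classifier was saved as model.sav by train_svm.py, that cluster_indices and cloud_objects come from the clustering step above, and that the sensor_stick feature utilities from the exercises are on your path:

import pickle
import numpy as np
import rospy
from sklearn.preprocessing import LabelEncoder
from sensor_stick.features import compute_color_histograms, compute_normal_histograms
from sensor_stick.pcl_helper import pcl_to_ros
from sensor_stick.srv import GetNormals

def get_normals(cloud):
    # Wraps the normals service provided by the exercises' feature extractor
    get_normals_prox = rospy.ServiceProxy('/feature_extractor/get_normals', GetNormals)
    return get_normals_prox(cloud).cluster

# Load the classifier, label encoder classes, and scaler saved by train_svm.py
model = pickle.load(open('model.sav', 'rb'))
clf = model['classifier']
encoder = LabelEncoder()
encoder.classes_ = model['classes']
scaler = model['scaler']

# Classify each cluster found during segmentation
for index, pts_list in enumerate(cluster_indices):
    pcl_cluster = cloud_objects.extract(pts_list)
    ros_cluster = pcl_to_ros(pcl_cluster)

    # Same features you trained on: color and normal histograms
    chists = compute_color_histograms(ros_cluster, using_hsv=True)
    nhists = compute_normal_histograms(get_normals(ros_cluster))
    feature = np.concatenate((chists, nhists))

    # Scale the feature vector and predict the object's label
    prediction = clf.predict(scaler.transform(feature.reshape(1, -1)))
    label = encoder.inverse_transform(prediction)[0]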

To test with the project, first run:

$ roslaunch pr2_robot pick_place_project.launch

and then,

$ rosrun pr2_robot project_template.py

You should arrive at a result similar to what you got in Exercise-3, but this time with new objects in a new environment!